OCPBUGS-51273: Don't crashloop for HAProxy init container #4963

Merged

Conversation

@cybertron (Member)

Previously we just crashlooped when the HAProxy init container failed, which is a normal, expected condition when HAProxy starts before CoreDNS. This causes problems in CI because a pod crashing more than 3 times in a row is treated as a failure. While the check usually passes quickly, we are hitting an odd timing issue during upgrades: when the node is just about to reboot after the MCO updates the pod definitions, the check takes longer than normal because ostree is updating the node at the same time.

Since this is just a case of everything behaving as expected, let's stop failing the pod for an expected situation. This change puts the api-int call in a loop so it runs until CoreDNS is ready, and we never trigger error reporting for harmless timing issues.
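The change described above amounts to wrapping a one-shot api-int probe in a retry loop so the init container blocks instead of exiting non-zero. A minimal sketch of the idea, not the actual MCO template: the `wait_for` helper, `RETRY_INTERVAL`, and the example URL below are all illustrative assumptions.

```shell
#!/bin/sh
# Sketch only: wrap the previously one-shot api-int probe in a loop so
# the init container waits instead of exiting non-zero and crashlooping.
# wait_for, RETRY_INTERVAL, and the example URL are assumptions, not
# taken from the PR diff.
RETRY_INTERVAL="${RETRY_INTERVAL:-5}"

wait_for() {
  # Retry the given command until it succeeds. Because this function
  # never returns non-zero, the init container cannot hit
  # CrashLoopBackOff while CoreDNS is still coming up.
  until "$@"; do
    echo "api-int not ready yet; retrying in ${RETRY_INTERVAL}s" >&2
    sleep "$RETRY_INTERVAL"
  done
}

# Example one-shot probe the loop would wrap (URL is illustrative):
# wait_for curl -kLfs -o /dev/null "https://api-int.<cluster-domain>:6443/readyz"
```

The logging and interval are design choices; the key property is that the script blocks until success rather than failing, which is exactly what turns the "expected condition" into a non-event for CI.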


@openshift-ci-robot added labels on Mar 31, 2025: jira/severity-important (the referenced Jira bug's severity is important for the targeted branch), jira/valid-reference (this PR references a valid Jira ticket), and jira/valid-bug (the referenced Jira bug is valid for the targeted branch).
@openshift-ci-robot (Contributor)

@cybertron: This pull request references Jira Issue OCPBUGS-51273, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @jadhaj

The bug has been updated to refer to the pull request using the external bug tracker.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@cybertron (Member, Author)

/retest-required

Not used in hypershift.

@cybertron (Member, Author)

/retest-required

openshift-ci bot commented Apr 2, 2025

@cybertron: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                                    Commit   Required  Rerun command
ci/prow/e2e-gcp-op-ocl                       90907d5  false     /test e2e-gcp-op-ocl
ci/prow/bootstrap-unit                       90907d5  false     /test bootstrap-unit
ci/prow/e2e-azure-ovn-upgrade-out-of-change  90907d5  false     /test e2e-azure-ovn-upgrade-out-of-change


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@dgoodwin (Contributor)

dgoodwin commented Apr 8, 2025

Test failures have calmed down but could return at any time; it would be great to get this merged!

@jluhrsen

@cybertron do we want to get this in? I was just checking up on Networking bugs with component-regression labels and ended up here. Looks like @dgoodwin wants it too?

@cybertron (Member, Author)

Yes, we still want this, but the HyperShift job appears to be perma-failing right now, so I'm holding off on a retest. I assume that is related to the ongoing CI infra outage.

@cybertron (Member, Author)

/retest-required

Hypershift looks healthier now.

@mkowalski (Contributor)

/lgtm

@openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Apr 24, 2025.
@cybertron (Member, Author)

/assign @dkhater-redhat

@djoshy (Contributor)

djoshy commented Apr 24, 2025

/approve

openshift-ci bot commented Apr 24, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cybertron, djoshy, mkowalski


@openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Apr 24, 2025.
@openshift-merge-bot openshift-merge-bot bot merged commit 6d96a78 into openshift:main Apr 24, 2025
15 of 18 checks passed
@openshift-ci-robot (Contributor)

@cybertron: Jira Issue OCPBUGS-51273: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-51273 has been moved to the MODIFIED state.


@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: ose-machine-config-operator
This PR has been included in build ose-machine-config-operator-container-v4.20.0-202504250541.p0.g6d96a78.assembly.stream.el9.
All builds following this will include this PR.

@openshift-merge-robot (Contributor)

Fix included in accepted release 4.19.0-0.nightly-2025-04-29-095709
